1 Problem A - Euler-Maruyama Method

We implement a discretization of the Black-Scholes-Merton (BSM) price dynamics with the parameter set \((s_0, T, \mu, \sigma) = (12, 1, 0.03, 0.17)\) and \(n = 250\) steps. We generate \(M_1 = 10\), \(M_2 = 100\), \(M_3 = 1000\), \(M_4 = 10000\) and \(M_5 = 100000\) paths. For the first three cases, the paths are plotted in separate figures together with the mean value of the price process.

set.seed(061999)
s0 <- 12
T1 <- 1
mu <- 0.03
sigma <- 0.17
n <-  250
t <- seq(0, T1, length.out = n+1) # n steps means n+1 points in this vector. 
h <- t[2] - t[1] # h = T1/n gives the same result. 
M1 <- 10
M2 <- 100
M3 <- 1000
M4 <- 10000
M5 <- 100000

The price dynamics are implemented with two different methods for calculating each path. The R function below has the parameter version, which selects how each price path is calculated: onestep or EM. The first method is the one-step solution. Since the Wiener process satisfies \(W_t \sim N(0,t)\) for all \(t \in (0, T]\), and the increments \(W_{t_1} - W_{t_0}, \hspace{0.1em} W_{t_2} - W_{t_1}, \ldots, \hspace{0.1em} W_{t_n} - W_{t_{n-1}}\) are independent, we can construct such a process as a cumulative sum of normally distributed random variables. Thus, on our uniform discrete grid

\[ 0 = t_0 < \ldots < t_n = T, \]

we can simulate the following relation

\[ \begin{split} X(t_0) &= 0, \\ X(t_1) &= X(t_0) + \Delta_1 \sim N(0,h) = N(0,t_1),\\ X(t_2) &= X(t_1) + \Delta_2 \overset{\text{Indep.}}{\sim} N(0,h+h) = N(0,t_2), \\ &\vdots \\ X(t_n) &= X(t_{n-1}) + \Delta_n \sim N(0,nh) = N(0,t_n), \end{split} \]

with increments \(\Delta_1, \ldots, \Delta_n \overset{\text{i.i.d.}}{\sim} N(0,h)\),

where \(X\) represents the Wiener process, \(n\) is the number of steps in the discretization and \(h = \frac{T}{n}\) is the step size. Thus, we simulate \(n\) i.i.d. \(N(0,h)\) increments and use their cumulative sum (with \(X(t_0) = 0\)) as a random draw of the Brownian motion at each grid point. We substitute this for \(W_t\) in the solution of the SDE

\[ \mathrm{d}S_t = \mu S_t \mathrm{d}t + \sigma S_t\mathrm{d}W_t, \]

which is

\[ S_t = S_0 e^{\mu t}e^{\sigma W_t-\frac{\sigma^2}{2}t}, \]

which makes it possible to evaluate the one-step solution on the grid. The second method implemented is the Euler-Maruyama (EM) scheme

\[ \hat{S}_{t_{j+1}} = \hat{S}_{t_{j}} + \mu h\hat{S}_{t_{j}} + \sigma \sqrt{h} Z_{t_j} \hat{S}_{t_{j}}, \]

where \(Z_{t_j}\) are standard normally distributed variables.

Notice that the two implementations give slightly different price paths and slightly different estimates. Keep in mind that the results shown and discussed in the remainder of the problem come from the onestep method, although the results from the EM method are highly comparable.

# Price path of the stochastic process. 
price.path <- function(s0, t, mu, sigma, version = "onestep"){
  if (version == "onestep"){
    h <- t[2] - t[1] # Step size; the grid is uniform. 
    Wt <- rnorm(n = length(t) - 1, sd = sqrt(h)) # Draw n independent increments 
    # with mean 0 and variance h. 
    W2 <- cumsum(c(0, Wt)) # The cumulative sum of the increments simulates the 
    # Wiener process, which starts in W(0) = 0 and is N(0,t) at time t, i.e. the 
    # variance increases as the process runs further and further away from t = 0. 
    return(s0*exp(mu*t)*exp(sigma*W2 - sigma^2/2*t)) # Return the one-step solution to the SDE. 
  } else if (version == "EM") {
    n <- length(t) - 1 # t is not strictly needed for this version, but we extract 
    # n from t to keep the function parameters as simple as possible. 
    # We subtract 1 because the length of t is n+1. 
    Z <- rnorm(n) # Draw standard normal variables. 
    S.values <- rep(NA, length.out = n+1) # Vector for saving the entire price path. 
    S.values[1] <- s0 # The first value in the price path is s0. 
    h <- t[2] - t[1] # Step size. 
    for (j in 2:(n+1)){ # Loop for the EM scheme. 
      S.values[j] <- S.values[j-1] + mu*S.values[j-1]*h + sigma*S.values[j-1]*sqrt(h)*Z[j-1]
    }
    return(S.values) # Return the entire price path. 
  } else {
    stop("Please input a valid version name, either `EM` or `onestep`.")
  }
}
# Improvement: antithetic sampling could be implemented to reduce the variance of the MC estimator. 
# The same improvement applies throughout the assignment. 
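As a sketch of the variance-reduction idea mentioned in the comment above (a standalone illustration, not used for any of the results in this report; it re-seeds the RNG), antithetic sampling pairs every normal draw \(Z\) with \(-Z\) and averages the two resulting terminal prices:

```r
# Illustrative antithetic-sampling estimator of E(S_T) with the one-step solution.
# Parameters are those of Problem A; M.half pairs replace M independent draws.
set.seed(1)
s0 <- 12; T1 <- 1; mu <- 0.03; sigma <- 0.17
M.half <- 5000
Z <- rnorm(M.half)
ST.plus  <- s0*exp((mu - sigma^2/2)*T1 + sigma*sqrt(T1)*Z)  # paths driven by  Z
ST.minus <- s0*exp((mu - sigma^2/2)*T1 - sigma*sqrt(T1)*Z)  # paths driven by -Z
est <- mean((ST.plus + ST.minus)/2) # estimator of E(S_T) = s0*exp(mu*T1)
est
```

Because \(S_T\) is a monotone function of \(Z\), the paired terminal values are negatively correlated, so the pairwise averages vary less than independent draws and the estimator has lower variance at the same computational cost.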
# Generate the price paths for all the given number of paths. 
version <- "onestep" # Choose the version of the function we want to use. 

# For each of the M_i number of paths, we calculate M_i paths using the price.path function. 
# Helper that simulates M paths and stores them row-wise in a matrix. 
generate.paths <- function(M){
  paths <- matrix(NA_real_, nrow = M, ncol = n+1)
  for (i in 1:M){
    paths[i, ] <- price.path(s0 = s0, t = t, mu = mu, sigma = sigma, version = version)
  }
  paths
}

M1.paths <- generate.paths(M1)
M2.paths <- generate.paths(M2)
M3.paths <- generate.paths(M3)
M4.paths <- generate.paths(M4)
M5.paths <- generate.paths(M5)
mean.value <- s0*exp(mu*t)
matplot(t(M1.paths), type = "l", main = paste0(M1," paths"), xlab = "t in years", ylab = "S", xaxt = "n", ylim = c(8, 20))
lines(mean.value) # Drift / "mean value".
axis(1, at=seq(0,n, by = 10), labels = seq(0,1,length.out = n/10+1))

matplot(t(M2.paths), type = "l", main = paste0(M2," paths"), xlab = "t in years", ylab = "S", xaxt = "n", ylim = c(8, 20))
#lines(apply(M2.paths, 2, mean), cex = 2)
lines(mean.value) # Drift / "mean value".
axis(1, at=seq(0,n, by = 10), labels = seq(0,1,length.out = n/10+1))

matplot(t(M3.paths), type = "l", main = paste0(M3," paths"), xlab = "t in years", ylab = "S", xaxt = "n", ylim = c(8, 20))
lines(mean.value) # Drift / "mean value".
axis(1, at=seq(0,n, by = 10), labels = seq(0,1,length.out = n/10+1))

The Monte Carlo estimator \(\hat{S}_T\) of \(\mathbb{E}(S_T)\) is calculated separately for each value of \(M_i, \hspace{0.5em} i \in \{1,2,3,4,5\}\), together with a 95% confidence interval (CI) for each estimator. The estimators are then compared with the analytical value of \(\mathbb{E}(S_T)\) and the differences are explained.

The Monte Carlo estimator \(\hat{S}_T\) is simply the average of the values of all the different BSM price paths plotted above at time \(T = 1\). Thus, in practice, we only need the last value of each price path to calculate this estimator (and its standard error). The \((1-\alpha) \cdot 100\% = (1-0.05)\cdot 100\% = 95\%\) CIs are calculated from the sample standard deviation of the terminal values of all the different BSM price paths, using the approximation given by

\[ CI_{\alpha} = \left(\hat{S}_T - z_{\alpha/2}\frac{s_{S_T}}{\sqrt{M}}, \; \hat{S}_T + z_{\alpha/2}\frac{s_{S_T}}{\sqrt{M}}\right), \] where \(s_{S_T}\) is the aforementioned sample standard deviation, \(z_{\alpha/2}\) is the \(1-\alpha/2\) quantile of the standard normal distribution and \(M\) is the number of price paths simulated. This is an asymptotically valid \((1-\alpha)\cdot 100\%\) CI: by the central limit theorem, its coverage probability converges to \((1-\alpha)\cdot 100\%\) as \(M\) is increased.
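As a standalone sanity check of this approximation (illustrative, with hypothetical numbers unrelated to the BSM parameters), we can estimate the empirical coverage of the interval for the mean of log-normal draws, whose true mean \(e^{1/2}\) is known:

```r
# Empirical coverage of the asymptotic 95% CI for a log-normal mean.
set.seed(1)
true.mean <- exp(1/2)          # E(e^Z) with Z ~ N(0,1)
M <- 1000                      # draws per replication
covered <- replicate(2000, {
  x <- exp(rnorm(M))                  # log-normal sample
  half <- qnorm(0.975)*sd(x)/sqrt(M)  # CI half-width
  abs(mean(x) - true.mean) <= half    # does the CI cover the true mean?
})
mean(covered)                  # should be close to 0.95
```

The empirical coverage is slightly below the nominal level for skewed distributions at moderate \(M\), which is exactly the "asymptotically valid" caveat above.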

# Calculate the Monte Carlo estimator of $\hat{S}_T$ for each value of $M_i$.
hat_ST <- function(M){
  mean(M[, ncol(M)]) # Average of the last value of all M_i paths. 
}

CI_ST <- function(M){
  s <- sd(M[, ncol(M)]) # Sample standard deviation of the terminal values. 
  m <- hat_ST(M) # Point estimate.
  half <- qnorm(0.975)*s/sqrt(nrow(M)) # Half-width of the 95% CI. 
  return(list(l = m - half, u = m + half))
}
M1.hat <- hat_ST(M1.paths) 
CI1 <- CI_ST(M1.paths)
M2.hat <- hat_ST(M2.paths) 
CI2 <- CI_ST(M2.paths)
M3.hat <- hat_ST(M3.paths) 
CI3 <- CI_ST(M3.paths)
M4.hat <- hat_ST(M4.paths) 
CI4 <- CI_ST(M4.paths)
M5.hat <- hat_ST(M5.paths) 
CI5 <- CI_ST(M5.paths)

col1 <- rbind(M1.hat, CI1$l, CI1$u)
col2 <- rbind(M2.hat, CI2$l, CI2$u)
col3 <- rbind(M3.hat, CI3$l, CI3$u)
col4 <- rbind(M4.hat, CI4$l, CI4$u)
col5 <- rbind(M5.hat, CI5$l, CI5$u)
hats <- cbind(col1, col2, col3, col4, col5)
colnames(hats) <- c("M1 = 10", "M2 = 100", "M3 = 1000", "M4 = 10000", "M5 = 100000") 
rownames(hats) <- c("Est.", "Lower CI", "Upper CI")
knitr::kable(hats, caption = "Monte Carlo Estimation for S_T, varying M")
Monte Carlo Estimation for S_T, varying M

             M1 = 10   M2 = 100   M3 = 1000   M4 = 10000   M5 = 100000
Est.        12.79035   12.27904    12.41963     12.32897      12.36536
Lower CI    11.86917   11.90612    12.28574     12.28727      12.35226
Upper CI    13.71154   12.65195    12.55352     12.37067      12.37845

Now, what is the analytic solution for \(\mathbb{E}(S_T)\)? We know that the price of the risky asset is log-normally distributed for every \(t \in [0,T]\),

\[ S_t \sim \mathcal{L}N(\mu_{X_t}, \sigma^2_{X_t}), \]

where the parameters are those of the underlying normal variable \(X_t = \ln S_t\), and we write \(\mu^{*} = \mathbb{E}(S_t)\) for the mean (note that \(\mu^{*} \neq \mu_{X_t}\), since the parameters of a log-normal distribution refer to the log scale). In addition, we know that, when \(W_t \sim N(0,t)\),

\[ X_t \sim N(\ln S_0 - \left(\frac{\sigma^2}{2} - \mu\right)t, \sigma^2t) = N(\mu_{X_t}, \sigma^2_{X_t}), \] where \(S_t = e^{X_t}\). Finally, we know that the expected value of the log-normally distributed variable \(S_t\) is \(\exp{\left(\mu_{X_t} + \frac{\sigma^2_{X_t}}{2}\right)}\). This means that the analytic solution for \(\mathbb{E}(S_T)\) is

\[ \mu^*\rvert_{t = T} = \exp{\left(\mu_{X_t} + \frac{\sigma^2_{X_t}}{2}\right)}\Bigg\rvert_{t = T} = \exp{\left(\ln S_0 -\left(\frac{\sigma^2}{2} - \mu\right)t + \frac{\sigma^2t}{2}\right)}\Bigg\rvert_{t = T} = S_0 e^{\mu t}\rvert_{t = T} = S_0 e^{\mu T}, \] which in this case has the numerical value

mean.value[length(mean.value)] # Drift / "mean value".
#> [1] 12.36545
# This can also be calculated simply by s0*exp(mu*T1).
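The analytic value can also be cross-checked by sampling \(S_T\) directly from its exact log-normal law (a standalone illustrative check, independent of the path simulations above; it re-seeds the RNG):

```r
# Draw S_T directly from its exact distribution and compare the sample mean
# with the closed-form expectation s0*exp(mu*T1) (Problem A parameters).
set.seed(1)
s0 <- 12; T1 <- 1; mu <- 0.03; sigma <- 0.17
ST <- s0*exp((mu - sigma^2/2)*T1 + sigma*sqrt(T1)*rnorm(1e6))
c(mc = mean(ST), exact = s0*exp(mu*T1)) # the two values should be close
```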

From the results above we clearly see that the MC estimates move closer and closer to the analytic solution as the number of paths \(M\) increases. For \(M_5\) the estimate is accurate to the first three decimals (depending on which version one uses; this holds for onestep), which I would regard as a very good estimate. We also see that the CIs clearly change as \(M\) increases: the lower limits increase and the upper limits decrease, so the \(95\%\) CIs become narrower as the number of paths grows. This is what we expect from MC theory, since the estimators are unbiased and, by the strong law of large numbers, the sample average converges a.s. to the true expected value.

Now we fix the number of paths \(M^* = 1000\) and vary the values of \(n\), i.e. the number of steps, while repeating the discussion done above.

M.star <- 1000
n1 <- 12
n2 <- 24
n3 <- 250
n4 <- 1000
# Find new paths. 
# Helper that simulates M.star paths on a grid with n.steps steps.
generate.paths.n <- function(n.steps){
  t.grid <- seq(0, T1, length.out = n.steps+1) # n steps means n+1 points. 
  paths <- matrix(NA_real_, nrow = M.star, ncol = n.steps+1)
  for (i in 1:M.star){
    paths[i, ] <- price.path(s0, t.grid, mu, sigma, version)
  }
  paths
}

n1.paths <- generate.paths.n(n1)
n2.paths <- generate.paths.n(n2)
n3.paths <- generate.paths.n(n3)
n4.paths <- generate.paths.n(n4)
n1.hat <- hat_ST(n1.paths) 
CI1 <- CI_ST(n1.paths)
n2.hat <- hat_ST(n2.paths) 
CI2 <- CI_ST(n2.paths)
n3.hat <- hat_ST(n3.paths) 
CI3 <- CI_ST(n3.paths)
n4.hat <- hat_ST(n4.paths) 
CI4 <- CI_ST(n4.paths)

col1 <- rbind(n1.hat, CI1$l, CI1$u)
col2 <- rbind(n2.hat, CI2$l, CI2$u)
col3 <- rbind(n3.hat, CI3$l, CI3$u)
col4 <- rbind(n4.hat, CI4$l, CI4$u)

hats <- cbind(col1, col2, col3, col4)
colnames(hats) <- c("n1 = 12", "n2 = 24", "n3 = 250", "n4 = 1000")
rownames(hats) <- c("Est.", "Lower CI", "Upper CI")
knitr::kable(hats, caption = "Monte Carlo Estimation for S_T, varying n")
Monte Carlo Estimation for S_T, varying n

             n1 = 12    n2 = 24   n3 = 250   n4 = 1000
Est.        12.38563   12.27697   12.43257    12.40517
Lower CI    12.24430   12.15002   12.30242    12.27156
Upper CI    12.52696   12.40391   12.56272    12.53879

We can make several observations in this case. First of all, the estimates hover around the true mean value of the price process. Secondly, the increase in precision is not as regular as in table 1. For example, the point estimate obtained with \(n_1 = 12\) is the closest to the analytic solution (noting that its CI is wide), which is unexpected, since this is the smallest number of steps used. Moreover, when increasing \(n\), the lower limit of the CI decreases from \(n_1\) to \(n_2\), in contrast to the behaviour we saw in table 1, and the same happens from \(n_3\) to \(n_4\). We also see an increase in the upper limit of the CI when moving from \(n_1\) (or \(n_2\)) to \(n_3\).

What are the possible explanations? These results are (pseudo-)random, so changing the seed changes the results, and a different seed might produce CIs that look more regular or closer to what we expect. More fundamentally, the onestep version draws the terminal value \(S_T\) from its exact distribution for any \(n\), so refining the grid cannot reduce the error at time \(T\); only the statistical error, which is governed by \(M\), remains. The fineness of the grid therefore matters only to a certain extent, and if \(M\) is too small we cannot improve the estimates simply by calculating each price path on a finer grid. Thus it is the number of paths \(M\), and not the number of discretization points \(n\), that yields dramatic improvements when increased: the estimates hover around the true value when \(n\) is increased with \(M\) fixed, but change much more when \(M\) is increased with \(n\) fixed. It is, however, worth noting that in our experiment the values of \(n\) span only three orders of magnitude, while the values of \(M\) span five, which might bias this comparison somewhat.

All in all, the conclusion is that a large value of \(M\) is more important than a very large value of \(n\) (to a certain extent), in accordance with the MC theory mentioned earlier.
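The insensitivity of the one-step construction to \(n\) can be illustrated with a small standalone experiment (onestep.ST is a hypothetical helper written for this sketch, not part of the assignment code; it re-seeds the RNG):

```r
# The terminal value of the one-step solution has the same law for any n:
# compare sample means of S_T on a 12-step and a 1000-step grid.
set.seed(1)
s0 <- 12; T1 <- 1; mu <- 0.03; sigma <- 0.17
onestep.ST <- function(n){
  W.T <- sum(rnorm(n, sd = sqrt(T1/n)))        # W(T) as a sum of n increments, N(0,T1)
  s0*exp(mu*T1)*exp(sigma*W.T - sigma^2/2*T1)  # terminal value of the path
}
m.coarse <- mean(replicate(5000, onestep.ST(12)))
m.fine   <- mean(replicate(5000, onestep.ST(1000)))
c(coarse = m.coarse, fine = m.fine, exact = s0*exp(mu*T1)) # all three close
```

Any remaining discrepancy between the two sample means is pure Monte Carlo noise, not discretization error.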

2 Problem B - Option Pricing I

set.seed(061999)
s0 <- 24
T1 <- 1.5
r <- 0.02
sigma <- 0.22
K <- 23.5
# BSM pricing formula for European call option, 
# to calculate the fair price at t = 0 in closed form.
BSM <- function(s0, T1, r, sigma, K){
  d1 <- (log(s0/K)+(r+sigma^2/2)*T1)/(sigma*sqrt(T1))
  d2 <- d1 - sigma*sqrt(T1)
  s0*pnorm(d1) - exp(-r*T1)*K*pnorm(d2)  
}

We calculate the price of a Call option with parameter set \((s_0, T, r, \sigma, K) = (24, 1.5, 0.02, 0.22, 23.5)\), using the Black-Scholes-Merton (BSM) formula.

(BSM.price <- BSM(s0, T1, r, sigma, K))
#> [1] 3.149899

The price is approximately equal to 3.15, when rounded to 2 decimals.
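As a standalone sanity check of the formula (illustrative; the BSM function is restated so the snippet is self-contained), the call price should approach the discounted intrinsic value \(\max(0, s_0 - Ke^{-rT})\) as \(\sigma \to 0\):

```r
# In the zero-volatility limit the BSM call price tends to max(0, s0 - K*exp(-r*T1)).
BSM <- function(s0, T1, r, sigma, K){
  d1 <- (log(s0/K) + (r + sigma^2/2)*T1)/(sigma*sqrt(T1))
  d2 <- d1 - sigma*sqrt(T1)
  s0*pnorm(d1) - exp(-r*T1)*K*pnorm(d2)
}
low.vol <- BSM(24, 1.5, 0.02, 1e-8, 23.5)    # nearly deterministic dynamics
limit   <- max(0, 24 - 23.5*exp(-0.02*1.5))  # discounted intrinsic value
c(low.vol = low.vol, limit = limit)          # the two values agree
```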

We implement a Monte Carlo estimator for the price of this option by simulating paths with the Euler-Maruyama method for steps \(n = 10, 100, 1000\) and paths \(M = 10, 100, 1000, 10000, 100000\).

Notice that for path-independent options, such as standard European Call and Put options, it is not necessary to save the price path as is done in my implementation. This is done to keep the function as general as possible, so that it can be re-used for the rest of the assignment. To increase computational efficiency and reduce memory use, one could instead iteratively overwrite a single variable rather than saving the entire price path history.
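The memory remark above can be sketched as follows (a standalone illustration with the Problem B parameters; em.terminal is a hypothetical helper, not part of the assignment code): for a path-independent payoff, only the current price needs to be kept.

```r
# Euler-Maruyama terminal value without storing the whole path:
# one scalar is overwritten n times instead of filling a length-(n+1) vector.
set.seed(1)
em.terminal <- function(s0, mu, sigma, T1, n){
  h <- T1/n
  s <- s0
  for (j in 1:n){
    s <- s + mu*s*h + sigma*s*sqrt(h)*rnorm(1)  # one EM step, then discard the past
  }
  s
}
em.terminal(24, 0.02, 0.22, 1.5, 100) # one simulated value of S_T
```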

# Monte Carlo estimator based on Euler-Maruyama paths (cf. slide 28). 
# Written generally: the payoff profile and the price functional are supplied as functions. 
MC <- function(M, n, r, s0, mu, sigma, T1, payoff.profile, set.S, ...){
  h <- T1/n # Step size for Euler-Maruyama. 
  C.values <- rep(NA, length.out = M) # Vector for saving the payoff of each path. 
  for (i in 1:M){ # Loop over the M paths.
    Z <- rnorm(n) # Draw n standard normal variables. 
    S.values <- rep(NA, length.out = n+1) # Vector for saving the price path. 
    S.values[1] <- s0 # Starting price at time t = 0. 
    for (j in 2:(n+1)){ # Euler-Maruyama scheme: recursively simulate the price path. 
      S.values[j] <- S.values[j-1] + mu*S.values[j-1]*h + sigma*S.values[j-1]*sqrt(h)*Z[j-1]
    }
    S <- set.S(S.values) # Extract the price functional used in the payoff profile. 
    # Generality is kept using the helper function set.S.
    C.values[i] <- payoff.profile(S, ...) # Payoff for this path; extra arguments 
    # (e.g. the strike K) are passed through via `...`.
  }
  # Return the discounted average payoff (the MC estimator), together with the 
  # discounted payoffs per path, which are used to calculate the CIs in Problem C. 
  return(list(MC = exp(-r*T1)*mean(C.values), path.values = exp(-r*T1)*C.values))
}
# Definition of helper functions to feed into MC estimator code above. 
Call.profile <- function(S, K){
  max(0, S-K)
}

# Feed the function into the MC estimator in order to calculate estimate for Call option. 
Call.set.S <- function(S.values){
  S.values[length(S.values)] # The Call option uses the last value in the list. 
  # This is standard for European Call option. 
}

Assuming that \(\mu = r\), we use the MC estimator to calculate the price of the option.

M.list <- c(10, 100, 1000, 10000, 100000)
n.list <- c(10, 100, 1000)
# Simulate paths with the MC estimator and Euler-Maruyama. 
combinations <- matrix(rep(NA, length.out = length(M.list)*length(n.list)), nrow = length(M.list))

for (i in 1:length(M.list)){
  for (j in 1:length(n.list)){
    h <- T1/n.list[j]
    combinations[i, j] <- MC(M = M.list[i], n = n.list[j], r = r, s0 = s0, mu = r, 
                  sigma = sigma, T1 = T1, payoff.profile = Call.profile, set.S = Call.set.S, K = K)$MC
  }
}
# Calculations of relative error and absolute error.
rel.errors <- (BSM.price - combinations)/BSM.price
abs.errors <- abs(BSM.price - combinations)
# Tabulate the results.
df.rel <- data.frame(rel.errors)
rownames(df.rel) <- c("M = 10", "M = 100", "M = 1000", "M = 10000", "M = 100000")
colnames(df.rel) <- c("n = 10", "n = 100", "n = 1000")

df.abs <- data.frame(abs.errors)
rownames(df.abs) <- c("M = 10", "M = 100", "M = 1000", "M = 10000", "M = 100000")
colnames(df.abs) <- c("n = 10", "n = 100", "n = 1000")

knitr::kable(df.rel, caption = "Relative Errors")
Relative Errors

                 n = 10      n = 100     n = 1000
M = 10       -1.0924571    0.4669412    0.5786881
M = 100       0.1971622   -0.0796902   -0.2981574
M = 1000     -0.0274607    0.0206036    0.0394080
M = 10000    -0.0193889    0.0442892    0.0199679
M = 100000   -0.0033752   -0.0016929    0.0024548
knitr::kable(df.abs, caption = "Absolute Errors")
Absolute Errors

                 n = 10      n = 100     n = 1000
M = 10        3.4411299    1.4708178    1.8228094
M = 100       0.6210411    0.2510161    0.9391658
M = 1000      0.0864983    0.0648991    0.1241311
M = 10000     0.0610730    0.1395064    0.0628968
M = 100000    0.0106317    0.0053325    0.0077324

How can these results be interpreted in view of \(n\) and \(M\)?

First of all, we can clearly see that, in the majority of cases, the absolute values of both types of errors decrease when the number of paths \(M\) increases. This is consistent with what we expect from the MC estimator, as noted earlier. The best relative error obtained is \(\approx -0.0017\), meaning that we get within \(0.2\%\) of the closed-form solution, which is very accurate.

Secondly, we notice that it is the change in \(M\) that generally leads to more dramatic changes (often of an order of magnitude) in the (absolute values of the) errors, as already noted in Problem A. Thus, from the errors seen in these tables, we would conclude the same as in Problem A regarding the level of importance of the values \(M\) and \(n\). Increasing \(M\) (moving south in the tables), while \(n\) is fixed generally decreases the error, whereas this does not happen as frequently when increasing \(n\) (moving east in the tables), while \(M\) is fixed.

Additionally, we notice that the lowest errors tend to be found in the lower right corner of the tables, which is as expected, since this is the area of the table with the finest discretized grids (large \(n\)) and more paths (large \(M\)).
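This qualitative behaviour matches the standard error budget for Euler-Maruyama Monte Carlo. Writing \(C\) for the exact BSM price and \(\hat{C}_{M,n}\) for the estimator (notation introduced here for this remark), the EM scheme has weak error of order \(h = T/n\), while the statistical error is of order \(1/\sqrt{M}\), so roughly

\[ \left| \hat{C}_{M,n} - C \right| \;\lesssim\; C_1 \frac{T}{n} + C_2 \frac{1}{\sqrt{M}}, \]

for some constants \(C_1, C_2\) depending on the model and the payoff. In the tables above the statistical term dominates, which is why increasing \(M\) improves the estimates more reliably than increasing \(n\).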

Moreover, notice that every entry of table 3 (relative errors) is smaller in absolute value than the corresponding entry of table 4 (absolute errors): the relative errors are calculated in almost the same way, the only differences being that no absolute value is taken and that we divide by the BSM price (\(\approx 3.15\)). If the absolute value were also taken in the calculation of the relative errors, the entries would agree up to this scaling, with the seven negative signs disappearing. I cannot find a clear pattern in the negative signs of the relative errors, so there is no indication that specific combinations of \(n\) and \(M\) systematically overestimate the BSM price.

The results obtained for values of \(M\) smaller than 1000 are of little use, since these estimates lie far from the BSM price. With \(M = 1000\) the relative errors move into the 5% range, which improves further as \(M\) is increased. Thus, we need a (relatively) substantial number of paths for the MC estimator to yield useful estimates; 10 or 100 paths are not enough.

3 Problem C - Option Pricing II

A Monte Carlo estimator is implemented for an Asian Call Option. This Asian Call Option averages prices every Friday. We set \(n = 252\) and assume that day \(1\) is a Monday. This means that we average the prices \(S_{t_5}, S_{t_{10}}, S_{t_{15}}, \ldots\). The payoff profile for this option is thus \((\bar{S}-K)^+\), where \(\bar{S}\) is the arithmetic average of the prices \(S_{t_i}, \hspace{0.2em} i \in \{5, 10, 15, \ldots\}\). We set \(M = 10000\) and use the parameter set \((s_0, T, r, \sigma, K) = (20, 1, 0.02, 0.24, 20)\).

set.seed(061999)
s0 <- 20
T1 <- 1
r <- 0.02
sigma <- 0.24
K <- 20
n <- 252
M <- 10000

# This function is fed into the MC estimator instead. 
Asian.set.S <- function(S.values){ 
  # S.values[1] holds the price at t_0, so the price on day j is S.values[j+1]. 
  # We extract the Friday prices (days 5, 10, 15, ...) and take their arithmetic mean.
  mean(S.values[seq(6, length(S.values), by = 5)])
}

# Feed the payoff profile into the MC estimator. 
# This is the same as the Call.profile above, so not needed!
Asian.profile <- function(S, K){
  max(0, S-K)
}
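The Friday indexing can be checked on a dummy path (a standalone illustration): since the first element of the path holds the price at \(t_0\), the price on day \(j\) sits at index \(j+1\).

```r
# Dummy path whose entry at index j+1 equals the day number j.
S.values <- 0:252
fridays <- seq(6, length(S.values), by = 5) # indices of days 5, 10, ..., 250
S.values[fridays][1:3]  # 5 10 15
length(fridays)         # 50 Fridays in 252 trading days
```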

# Still assuming that mu = r.
(Asian.price <- MC(M = M, n = n, r = r, s0 = s0, mu = r, 
   sigma = sigma, T1 = T1, Asian.profile, Asian.set.S, K = K)$MC)
#> [1] 1.186436

As we can see from the result above, the MC estimate of the price is approximately 1.19.

A Monte Carlo estimator is implemented for a Lookback option with payoff profile \((S_{max}-K)^+\), where \(S_{max}\) refers to the maximum price during the time to maturity of the option. We use the parameter set \((s_0, T, r, \sigma, K) = (22, 2, 0.02, 0.29, 21)\).

set.seed(061999)
s0 <- 22
T1 <- 2
r <- 0.02
sigma <- 0.29
K <- 21
n <- 1000
M1 <- 1000
M2 <- 10000
M3 <- 100000

# This function can be fed into the MC estimator instead. 
lookback.set.S <- function(S.values){ 
  # We return the maximum value among the S.values. 
  max(S.values)
}

# Feed the payoff profile into the MC estimator. 
# This is the same as the Call.profile above, so not needed!
lookback.profile <- function(S, K){
  max(0, S-K)
}
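On a toy path the two helpers compose as follows (a standalone illustration with hypothetical numbers; the helpers are restated so the snippet is self-contained):

```r
lookback.set.S <- function(S.values) max(S.values)  # running maximum of the path
lookback.profile <- function(S, K) max(0, S - K)    # call-type payoff on that maximum
S.toy <- c(22, 25.3, 24.1, 23.8)                    # hypothetical price path
lookback.profile(lookback.set.S(S.toy), K = 21)     # (25.3 - 21)^+ = 4.3
```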

# We still assume that mu = r.

We calculate estimations and 95% confidence intervals of this option price for \(M_1 = 1000\), \(M_2 = 10000\) and \(M_3 = 100000\) paths.

# We calculate CIs of this option price for the different values of M.
M1.results <- MC(M = M1, n = n, r = r, s0 = s0, mu = r, 
   sigma = sigma, T1 = T1, lookback.profile, lookback.set.S, K = K)
M1.path.values <- M1.results$path.values 
M1.MC <- M1.results$MC

M2.results <- MC(M = M2, n = n, r = r, s0 = s0, mu = r, 
   sigma = sigma, T1 = T1, lookback.profile, lookback.set.S, K = K)
M2.path.values <- M2.results$path.values 
M2.MC <- M2.results$MC

M3.results <- MC(M = M3, n = n, r = r, s0 = s0, mu = r, 
   sigma = sigma, T1 = T1, lookback.profile, lookback.set.S, K = K)
M3.path.values <- M3.results$path.values 
M3.MC <- M3.results$MC
# Now we can calculate the 95% CIs, using the same formula as explained earlier. 
lookback.CI <- function(M.path.values){
  z.alpha <- qnorm(0.975) # Standard normal quantile. 
  M <- length(M.path.values) # Number of paths. 
  M.MC <- mean(M.path.values) # Point estimate.
  sd1 <- sd(M.path.values) # Sample standard deviation.
  half <- z.alpha*sd1/sqrt(M) # Half-width of the CI. 
  list(l = M.MC - half, u = M.MC + half) # Return the CI. 
}

CI1 <- lookback.CI(M1.path.values)
CI2 <- lookback.CI(M2.path.values)
CI3 <- lookback.CI(M3.path.values)

col1 <- rbind(M1.MC, CI1$l, CI1$u)
col2 <- rbind(M2.MC, CI2$l, CI2$u)
col3 <- rbind(M3.MC, CI3$l, CI3$u)

hats <- cbind(col1, col2, col3)
colnames(hats) <- c("M1 = 1000", "M2 = 10000", "M3 = 100000") 
rownames(hats) <- c("Est.", "Lower CI", "Upper CI")
knitr::kable(hats, caption = "Monte Carlo Estimation for Lookback Option, Varying M")
Monte Carlo Estimation for Lookback Option, Varying M

            M1 = 1000   M2 = 10000   M3 = 100000
Est.         9.402367     9.044822      9.177744
Lower CI     8.911186     8.892006      9.128017
Upper CI     9.893549     9.197637      9.227471

Similar observations as in the earlier problems can be made about how these CIs change with increasing \(M\): they become narrower as the number of paths grows, which is the behaviour we expect. Notice, however, that in some cases the lower CI limit decreases with increasing \(M\), and in some cases the upper CI limit increases. If I were to choose the fair price of this Lookback option to one decimal place, perhaps as a first estimate in a practical application, I would set it to \(9.2\), since this lies within the CI of the price estimated with the largest \(M\). The \(95\%\) CIs can be interpreted as follows: if this procedure were repeated many times, approximately \(95\%\) of the resulting intervals would contain the true fair price of the option.